Learning Optimal Features via Partial Invariance
Authors
Abstract
Learning models that are robust to distribution shifts is a key concern in the context of their real-life applicability. Invariant Risk Minimization (IRM) is a popular framework that aims to learn robust models from multiple environments. The success of IRM requires an important assumption: the underlying causal mechanisms/features must remain invariant across environments. When this assumption is not satisfied, we show that IRM can over-constrain the predictor; to remedy this, we propose a relaxation via partial invariance. In this work, we theoretically highlight the sub-optimality of IRM and then demonstrate how learning over a partition of the training domains can help improve invariant models. Several experiments, conducted both in linear settings as well as with deep neural networks on tasks over language and image data, allow us to verify our conclusions.
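To make the invariance constraint discussed above concrete, the following is a minimal sketch of an IRMv1-style training objective (per Arjovsky et al.'s formulation, which this paper builds on, not the paper's partial-invariance method itself). The function name `irmv1_objective` and the linear-predictor setup are illustrative assumptions: each environment contributes its squared-error risk plus a penalty equal to the squared gradient of that risk with respect to a fictitious scalar multiplier on the predictor, evaluated at 1.

```python
import numpy as np

def irmv1_objective(envs, w, lam=1.0):
    """IRMv1-style objective for a linear predictor f(x) = X @ w.

    envs: list of (X, y) pairs, one per training environment.
    The penalty term is the squared gradient of each environment's
    risk with respect to a scalar scale s on the predictor, at s = 1;
    it vanishes when the predictor is simultaneously optimal
    (under rescaling) in every environment.
    """
    total_risk, total_penalty = 0.0, 0.0
    for X, y in envs:
        f = X @ w                        # predictions in this environment
        err = f - y
        risk = np.mean(err ** 2)         # per-environment squared loss
        grad_s = np.mean(2.0 * err * f)  # d/ds E[(s*f - y)^2] at s = 1
        total_risk += risk
        total_penalty += grad_s ** 2
    return total_risk + lam * total_penalty
```

When a single `w` generates the labels in every environment, both the risks and the penalty are zero; under the partial-invariance relaxation, one would instead apply such an objective separately within learned subsets of the training domains.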
Related Resources
Optimal Partial-Order Plan Relaxation via MaxSAT
Partial-order plans (POPs) are attractive because of their least-commitment nature, which provides enhanced plan flexibility at execution time relative to sequential plans. Current research on automated plan generation focuses on producing sequential plans, despite the appeal of POPs. In this paper we examine POP generation by relaxing or modifying the action orderings of a sequential plan to o...
Partial Oblique Projection Learning for Optimal Generalization
In practice, it is necessary to implement an incremental and active learning for a learning method. In terms of such implementation, this paper shows that the previously discussed S-L projection learning is inappropriate to constructing a family of projection learning, and proposes a new version called partial oblique projection (POP) learning. In POP learning, a function space is decomposed in...
Learning optimal features for visual pattern recognition
The optimal coding hypothesis proposes that the human visual system has adapted to the statistical properties of the environment by the use of relatively simple optimality criteria. We here (i) discuss how the properties of different models of image coding, i.e. sparseness, decorrelation, and statistical independence are related to each other (ii) propose to evaluate the different models by ver...
Transfer in Reinforcement Learning via Shared Features
We present a framework for transfer in reinforcement learning based on the idea that related tasks share some common features, and that transfer can be achieved via those shared features. The framework attempts to capture the notion of tasks that are related but distinct, and provides some insight into when transfer can be usefully applied to a problem sequence and when it cannot. We apply the ...
Learning Graph Topological Features via GAN
Inspired by the generation power of generative adversarial networks (GANs) in image domains, we introduce a novel hierarchical architecture for learning characteristic topological features from a single arbitrary input graph via GANs. The hierarchical architecture consisting of multiple GANs preserves both local and global topological features and automatically partitions the input graph into r...
Journal
Journal Title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2023
ISSN: ['2159-5399', '2374-3468']
DOI: https://doi.org/10.1609/aaai.v37i6.25875